
Add exception handler on HTTP/2 parent channel to suppress WARN logs #48890

Merged
jeet1995 merged 17 commits into Azure:main from jeet1995:AzCosmos_Http2ParentChannelExceptionHandler
Apr 24, 2026

Conversation

@jeet1995
Member

@jeet1995 jeet1995 commented Apr 21, 2026

Problem

Customers see noisy Netty WARN logs in HTTP/2 scenarios:

An exceptionCaught() event was fired, and it reached at the tail of the pipeline.
io.netty.channel.unix.Errors$NativeIoException: recvAddress(..) failed with error(-104): Connection reset by peer

Root Cause

In HTTP/2, reactor-netty multiplexes streams on a shared parent TCP connection. The parent and child channels have different pipeline structures:

HTTP/1.1 pipeline (single channel — no leak to TailContext):

SslHandler → HttpClientCodec → ChannelOperationsHandler → [TAIL]
                                         ↑
                            Catches exceptions, bridges to
                            Reactor subscriber. Exception
                            never reaches TailContext.

HTTP/2 parent channel pipeline (BEFORE fix — leak to TailContext):

SslHandler → Http2FrameCodec → Http2MultiplexHandler → [TAIL]
                                                          ↑
                                            No handler catches it.
                                            TailContext logs WARN.

HTTP/2 parent channel pipeline (AFTER fix):

SslHandler → Http2FrameCodec → Http2MultiplexHandler → Http2ParentChannelExceptionHandler → [TAIL]
                                                                   ↑
                                                       Consumes Exception types.
                                                       Log level based on connection state.
                                                       Error types propagate to TailContext.

HTTP/2 child stream channel pipeline (unchanged):

H2ToHttp11Codec → IdleStateHandler → ChannelOperationsHandler → [TAIL]
                                              ↑
                               Same as HTTP/1.1 — catches exceptions,
                               bridges to Reactor subscriber.

Design: Connection-State-Based Log Level

The handler consumes Exception types on the parent channel (no exception type filtering within Exception). Error types (e.g., OutOfMemoryError) are not consumed — they propagate to TailContext for standard Netty handling.

The log level for Exception types is determined by connection state:

  • DEBUG — when activeStreams == 0 OR !channelActive.
  • WARN — when active streams exist on a live channel, or when the active stream count cannot be determined (null).
| Active streams | Channel active | Log level | Rationale |
|---|---|---|---|
| 0 | true/false | DEBUG | Idle connection — no in-flight requests affected |
| >0 | false | DEBUG | Channel already dead — streams will fail via subscriber |
| >0 | true | WARN | Live requests may be affected |
| null (codec unavailable) | true | WARN | Cannot determine state — safe default |
| null (codec unavailable) | false | DEBUG | Channel already dead regardless |

Active stream count is retrieved via Http2FrameCodec.connection().numActiveStreams() on the same parent channel pipeline. Returns null (not -1) if the codec is unavailable, making the unknown-state case explicit. Failures to retrieve the stream count are logged at DEBUG.
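The decision table above can be condensed into a small pure function. This is an illustrative sketch only — the enum and method names (`LogLevel`, `decideLogLevel`, `Http2ExceptionLogLevel`) are hypothetical and not the SDK's actual code, which embeds this logic inside the handler:

```java
// Hypothetical sketch of the connection-state-based log-level decision.
// A null activeStreams means Http2FrameCodec was absent from the pipeline.
enum LogLevel { DEBUG, WARN }

final class Http2ExceptionLogLevel {
    static LogLevel decideLogLevel(Integer activeStreams, boolean channelActive) {
        if (!channelActive) {
            // Channel already dead — streams will fail via their Reactor subscribers.
            return LogLevel.DEBUG;
        }
        if (activeStreams == null) {
            // Cannot determine state — take the safe default.
            return LogLevel.WARN;
        }
        // Idle connection logs at DEBUG; live in-flight streams log at WARN.
        return activeStreams == 0 ? LogLevel.DEBUG : LogLevel.WARN;
    }
}
```

Note that the `!channelActive` check comes first, which is what makes the condition an OR rather than an AND: either a dead channel or zero active streams is enough to demote the log to DEBUG.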

Why no exception type filtering?

By the time any exception reaches our handler, all upstream handlers (Http2FrameCodec, Http2MultiplexHandler) have already handled the protocol actions (GOAWAY, stream reset, child channel error delivery). The exception reaching TailContext is an echo of already-handled work, regardless of type. Connection state (active streams + channel activity) is the only dimension that determines whether the exception has diagnostic value.

Why OR (not AND) for the DEBUG condition?

Either condition alone is sufficient:

  • activeStreams == 0 — no in-flight requests affected, regardless of channel state
  • !channelActive — channel is already dead, any active streams will fail through their Reactor subscribers independently

Why no ctx.close()?

The handler does NOT close the channel. Connection lifecycle is owned by:

  • Netty transport layer — detects TCP RST/EOF, transitions channel to inactive
  • Http2FrameCodec — processes GOAWAY, resets streams
  • Http2Pool (reactor-netty) — evicts connections based on !channel.isActive(), GOAWAY, maxIdleTime, maxLifeTime

Our handler is the last in the pipeline with the least protocol context. Closing here would race with reactor-netty's pool management and could prematurely kill connections after non-fatal errors.

Why propagate Error but not Exception?

Error types (OOM, StackOverflowError) represent JVM-level failures that should not be silently consumed. Re-throwing Exception to TailContext provides no functional value — TailContext just logs WARN and swallows, which is what our handler already does with better context (connection state in the log message).
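The consume-vs-propagate split described above amounts to a single type check. A minimal sketch (the helper name is illustrative; in the real handler, consuming means logging and returning, while propagating means calling `ctx.fireExceptionCaught(cause)`):

```java
// Illustrative helper: Exception types are consumed by the handler;
// Error types (OutOfMemoryError, StackOverflowError) fall through
// toward TailContext for standard Netty handling.
final class ThrowableFilter {
    static boolean shouldConsume(Throwable cause) {
        return !(cause instanceof Error);
    }
}
```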

Testing

9 EmbeddedChannel unit tests with production-matching pipeline (Http2FrameCodec → Http2MultiplexHandler → handler):

| Test | What it proves |
|---|---|
| withoutHandler_exceptionReachesTail | BEFORE: exception reaches TailContext → WARN |
| withHandler_zeroActiveStreams_consumedAtDebug | 0 active streams → consumed at DEBUG |
| withHandler_exceptionDoesNotCloseChannel | Handler does NOT close channel |
| withHandler_runtimeException_zeroActiveStreams_consumed | RuntimeException also consumed (no type filtering) |
| withHandler_npe_zeroActiveStreams_consumed | NPE also consumed (no type filtering) |
| withHandler_activeStreams_consumedAtWarn | Active streams → consumed at WARN |
| withHandler_activeStreams_channelNotClosed | Active streams + exception → channel NOT closed |
| withHandler_codecAbsent_fallsBackToWarnPath | Codec absent → null stream count → safe WARN path |
| withHandler_errorNotConsumed_propagatesToTail | Error types propagate to TailContext (not consumed) |

WARN path log (activeStreams=1, channelActive=true)

2026-04-23 14:08:17,278       [main] WARN  com.azure.cosmos.implementation.http.Http2ParentChannelExceptionHandler - Exception on HTTP/2 parent connection [channel=[id: 0xembedded, L:embedded - R:embedded], activeStreams=1, channelActive=true, clientVmId=n/a]
java.io.IOException: Connection reset by peer
at com.azure.cosmos.implementation.http.Http2ParentChannelExceptionHandlerTest.withHandler_activeStreams_consumedAtWarn(Http2ParentChannelExceptionHandlerTest.java:149) ~[test-classes/:?]
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:577) ~[?:?]

Note: The !channelActive branch cannot be unit-tested with EmbeddedChannel because disconnect() tears down the pipeline before fireExceptionCaught can reach handlers. In production, exceptionCaught() fires while the channel is transitioning to inactive.

Impact

  • Handler only overrides exceptionCaught() — Netty @Skip optimization bypasses it for all hot-path events
  • Handler does NOT close the channel
  • Error types are propagated, not consumed
  • Exceptions with active streams on a live channel still log at WARN
  • Unknown stream count (codec unavailable) takes safe WARN path

Contributor

Copilot AI left a comment


Pull request overview

Adds a Netty channel handler to suppress noisy “exceptionCaught reached tail of pipeline” WARN logs on HTTP/2 parent (TCP) connections in Cosmos’ Reactor Netty transport, while preserving WARN-level signal when exceptions may impact in-flight HTTP/2 streams.

Changes:

  • Install an HTTP/2 parent-channel exceptionCaught handler from ReactorNettyClient when HTTP/2 is enabled.
  • Add Http2ParentChannelExceptionHandler that consumes parent-channel exceptions and logs at DEBUG vs WARN based on active stream count and channel activity.
  • Add EmbeddedChannel-based unit tests covering exception consumption behavior, and update changelog entry.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
|---|---|
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/http/ReactorNettyClient.java | Adds logic to install the new handler onto the HTTP/2 parent channel pipeline. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/http/Http2ParentChannelExceptionHandler.java | New handler that consumes parent-channel exceptions and logs based on connection state. |
| sdk/cosmos/azure-cosmos/CHANGELOG.md | Documents the fix in the unreleased section. |
| sdk/cosmos/azure-cosmos-tests/src/test/java/com/azure/cosmos/implementation/http/Http2ParentChannelExceptionHandlerTest.java | New unit tests validating the handler's exception consumption behavior. |
| sdk/cosmos/azure-cosmos-tests/pom.xml | Enables surefire tests and includes trailing whitespace changes. |

Comment thread on sdk/cosmos/azure-cosmos/CHANGELOG.md (outdated)
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

jeet1995 and others added 5 commits April 21, 2026 17:03
In HTTP/2, reactor-netty multiplexes streams on a shared parent TCP connection.
The parent channel pipeline has no ChannelOperationsHandler (unlike HTTP/1.1),
so TCP-level exceptions like Connection reset by peer (ECONNRESET) propagate to
Netty's TailContext, which logs them as WARN.

This adds Http2ParentChannelExceptionHandler to the parent channel via
doOnConnected (accessing channel.parent()). The handler consumes exceptions
at DEBUG level WITHOUT closing the channel or altering connection lifecycle,
matching HTTP/1.1 logging behavior.

Changes:
- Handler logs cause.toString() (not getMessage()) for null-safe diagnostics
- Defensive try-catch for duplicate handler name on concurrent stream creation
- Before/after verified with EmbeddedChannel unit tests

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…toString(), update changelog

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jeet1995 jeet1995 force-pushed the AzCosmos_Http2ParentChannelExceptionHandler branch from d68fa5c to 2a3b5b2 Compare April 21, 2026 21:05
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@xinlian12
Member

@sdkReviewAgent

Address Bhaskar's review: add two tests covering the else branch where
activeStreams > 0 on an active channel, exercising the WARN log path.

- withHandler_activeStreams_consumedAtWarn: creates an active H2 stream
  via codec.connection().local().createStream(), fires an exception, and
  verifies it is consumed (does not reach TailContext).
- withHandler_activeStreams_channelNotClosed: same setup, verifies the
  handler does not close the channel even with active streams.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@xinlian12
Member

Review complete (32:05)

No new comments — existing review coverage is sufficient.

Steps: ✓ context, correctness, cross-sdk, design, history, past-prs, synthesis, test-coverage

When Http2FrameCodec is absent from the pipeline, getActiveStreamCount()
returns -1. Since -1 != 0 and channelActive == true, the handler takes
the safe WARN path. This test covers that fallback behavior.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines will not run the associated pipelines, because the pull request was updated after the run command was issued. Review the pull request again and issue a new run command.

@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Member

@FabianMeiswinkel FabianMeiswinkel left a comment


LGTM - Thanks!

…ug log in catch

- Change getActiveStreamCount() to return Integer (nullable) instead of
  int with -1 sentinel. null explicitly means 'could not determine' and
  takes the safe WARN path. (Addresses Fabian's review)
- Add logger.debug in catch block so codec retrieval failures are
  observable instead of silently swallowed.
- Add Error guard in exceptionCaught: Error types (OOM, SOF) propagate
  to TailContext instead of being consumed. (Addresses Xinlian's review)
- Add withHandler_errorNotConsumed_propagatesToTail test.
- Update Javadoc to reflect Exception-only consumption and Error passthrough.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

Member

@xinlian12 xinlian12 left a comment


LGTM, thanks

In reactor-netty's H2 path, doOnConnected fires once per TCP connection
and connection.channel() IS the parent channel (channel.parent() is null).
The previous code assumed doOnConnected fires for child/stream channels
where channel.parent() would return the TCP parent.

Fix: resolve the H2 parent as channel.parent() ?? channel, handling both
the observed case (parent=null, channel IS the parent) and the alternate
case (parent!=null, install on parent).

Verified with integration test:
- Linux/epoll with TCP RST proxy (SO_LINGER=0, 30s idle timeout)
- 4.79.1 baseline: TailContext WARN appeared (Connection reset by peer)
- Fixed build: WARN suppressed, handler logged at DEBUG (activeStreams=0)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

doOnConnected fires for the parent TCP channel in reactor-netty's H2 path,
so connection.channel() IS the parent. No need for channel.parent() resolution.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Member

@FabianMeiswinkel FabianMeiswinkel left a comment


LGTM

@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

jeet1995 and others added 3 commits April 23, 2026 11:14
- Add local/remote address to WARN and DEBUG log messages for
  diagnostic parity with RNTBD connection loggers
- Mark handler @ChannelHandler.Sharable with singleton INSTANCE
  (handler is stateless - no instance fields)
- Update ReactorNettyClient to use INSTANCE instead of new
- Update tests to use INSTANCE

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Matches PartitionProcessor/HealthChecker patterns - avoids SLF4J
inline formatting issues. Channel.toString() provides L:/R: addresses.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

jeet1995 and others added 2 commits April 23, 2026 11:51
Resolve vmId lazily via ClientTelemetry.getMachineId(null) on first
access from non-event-loop thread. Store as immutable field in the
@sharable handler singleton.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Remove lazy singleton pattern (getOrCreateInstance) that could call
ClientTelemetry.getMachineId() on the Netty event loop (5s blocking).
Instead, create handler eagerly in configureChannelPipelineHandlers()
which runs on the caller's setup thread. The @sharable handler instance
is captured by the doOnConnected lambda.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copy link
Copy Markdown
Member

@xinlian12 xinlian12 left a comment


LGTM, thanks

jeet1995 and others added 2 commits April 23, 2026 12:48
Remove all blocking calls. Add ClientTelemetry.getCachedMachineId()
which reads a volatile field populated by getMachineId() during client
init. Handler reads it at log time - pure volatile read, zero blocking.
Restores static INSTANCE singleton (handler is stateless again).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@mbhaskar
Member

LGTM, Thanks

@jeet1995
Member Author

/azp run java - cosmos - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@jeet1995 jeet1995 merged commit cb1519a into Azure:main Apr 24, 2026
103 checks passed
6 participants